Raph Levien
25 Jan 2014
So-called "zero delay" filters have become popular in synthesizer and audio applications, due to their accurate emulation of analog filter designs and efficient implementation. The techniques used to create zero delay filters are considered standard in the circuit emulation field, but even so the idea of creating a zero delay filter from scratch can be intimidating to those just wanting to design a decent audio filter.
This notebook presents a very easy design methodology, using cookbook tools. There are references to the literature for those wanting to dig deeper into the theory. None of these techniques are new, but it is my hope that presenting them in this form will make them more accessible.
In broad outline, the approach recommended here is:
Choose an analog filter "prototype".
Write down the filter response in state space representation.
Choose a discretization method. Bilinear transform is most popular.
Discretize (convert from continuous time to discrete time).
Implement the filter in discrete state space representation.
The resulting filter will be a reasonably (but not perfectly) accurate emulation of the analog filter. It will share the same stability properties as the prototype, as well as similar behavior under modulation.
It is presented here as an IPython notebook, so you can easily read it as a static document, but you can also load it into your IPython environment, play with the tools, and adapt the code. Have fun!
In [1]:
%pylab inline
from scipy import signal
from scipy import linalg
I'll assume you already have a good idea which analog filter you want to simulate. We're going to concentrate on the linear response, but one of the great advantages of the state space approach is that it can be extended to nonlinearities without too much trouble.
One of the most important factors is the order of the filter, which corresponds directly to the dimensionality of the state vector. This technique can handle arbitrary order filters, but direct state space evaluation is definitely most efficient for smaller orders (2 or 4 especially). If you have a very high order filter, consider decomposing it into a series of smaller filters. There are tools for doing this in transfer function space; it's particularly easy if you know where your poles and zeros are. (Search for tf2sos for more information.)
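If your SciPy is new enough to have second order section support, this decomposition is a one-liner. A quick illustration (not needed for the rest of this notebook):
In [ ]:
# requires SciPy >= 0.16 for 'sos' output
sos = signal.butter(6, 0.2, output='sos')  # 6th-order Butterworth as three biquads
print(sos.shape)  # (3, 6): three second order sections, six coefficients each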
We'll use three filter designs as running examples, all of which are popular as filters in analog synthesizers: a simple one-pole lowpass (also known as the "RC lowpass"), the classic two-pole "state variable filter", and the 4-pole Moog resonant lowpass filter.
The one-pole lowpass is also known as the RC filter, as it can be trivially implemented with one resistor and one capacitor (generally in audio applications you'd want an opamp to buffer the output so it's available at low impedance, but that's not central to the response of the filter).
Being only one pole, this filter is super easy to analyze. Its impulse response is just exponential decay.
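Concretely, with the corner frequency normalized to 1 (as we'll do below), the impulse response is just $h(t) = e^{-t}$ for $t \ge 0$.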
(TODO: images for circuit diagrams for these filters)
For the state variable filter, see Figure 3 in https://ccrma.stanford.edu/~jos/svf/svf.pdf for a circuit diagram. This is given in highly idealized form, with integrators drawn as integrators rather than expanded out as opamps. Even so, it is the same as the classic state variable filter, for example the one presented in Don Lancaster's Active Filter Cookbook, which has been the basis of many synth builds.
The Moog 4-pole filter is of course one of the classic designs. There's quite a bit of literature on it (see http://www.timstinchcombe.co.uk/index.php?pge=papers).
A good simple description of the filter is given in https://ccrma.stanford.edu/~stilti/papers/moogvcf.pdf (Analyzing the Moog VCF with Considerations for Digital Implementation, by Tim Stilson and Julius Smith). This paper presents a very simple analog model of the linear response: it is simply 4 one-pole lowpass filters in series, all tuned to the same corner frequency, with a feedback path which controls resonance. Ignore the discretization in this paper, however; we'll do much better.
Once you've chosen your filter, get it into state space representation. This will be four matrices $(A, B, C, D)$ that together represent the differential equations of the filter. The general equations are:
$$ \begin{align} \mathbf{\dot{x}}(t) = & A\mathbf{x}(t) + B\mathbf{u}(t) \\ \mathbf{y}(t) = & C\mathbf{x}(t) + D\mathbf{u}(t) \end{align} $$Here, $\mathbf{u}(t)$ represents the input, $\mathbf{x}(t)$ represents the state, and $\mathbf{y}(t)$ represents the output. Such systems are known as "single input" or "multiple input" depending on the size of $\mathbf{u}$, and "single output" or "multiple output" depending on the size of $\mathbf{y}$. In this presentation, we'll concentrate on the "single input, single output" case.
Because it's just one pole, the state can be a scalar rather than a vector, so we don't even need to write this in matrix form. The differential equation is:
$ \dot{x}(t) = -\alpha x(t) + u(t) $
Variables are as in standard state variable representation. Lowpass is $y(t) = x(t)$, and highpass is $y(t) = u(t) - x(t)$.
Here, $\alpha$ controls the corner frequency. It's easiest just to normalize it to 1. The state space representation could hardly be simpler:
$$ \begin{align} A & = -1 \\ B & = 1 \\ C & = 1 \\ D & = 0 \end{align} $$For highpass, $C = -1$ and $D = 1$ instead.
In [2]:
def rc_lp_system():
    return array([[-1]]), array([1]), array([1]), 0

def rc_hp_system():
    return array([[-1]]), array([1]), array([-1]), 1
The JOS paper linked above gives the state variable representation (here we switch the order of the state variables so they represent left-to-right flow, and to emphasize the similarity to the other filters).
$$ \begin{align} A & = \left[\begin{array}{cc} -k & -1 \\ 1 & 0 \\ \end{array}\right] \\ B & = \left[\begin{array}{c} 1 \\ 0 \\ \end{array}\right] \\ C & = [ \begin{array}{cc} 0 & 1 \end{array} ] \\ D & = 0 \end{align} $$Here, $k$ controls resonance. It is equal to $1/Q$, and ranges from 2 for no resonance to 0 for undamped resonance, i.e. self-oscillation. For a resonance control that varies from 0 to 1, simply set $k = 2 - 2 \cdot \mathit{resonance}$.
For bandpass, $C = [ 1 \: 0 ]$ instead (code below), and for highpass, $C = [ {-k} \: {-1} ]$ and $D=1$. In fact, all "Audio EQ Cookbook" responses can easily be done with a state variable filter, making it an appealing alternative to canonical form biquads. See the Second order sections in matrix form notebook for all of these recipes. Also see Solving the continuous SVF equations using trapezoidal integration and equivalent currents for a different derivation.
In [3]:
def svf_lp_system(res):
    k = 2 - 2 * res
    A = array([[-k, -1], [1, 0]])
    B = array([1, 0])
    C = array([0, 1])
    D = 0
    return A, B, C, D

def svf_hp_system(res):
    k = 2 - 2 * res
    A = array([[-k, -1], [1, 0]])
    B = array([1, 0])
    C = array([-k, -1])
    D = 1
    return A, B, C, D
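Following the recipe above, the bandpass variant is:
In [ ]:
# bandpass: same A and B, but tap the first state variable
def svf_bp_system(res):
    k = 2 - 2 * res
    A = array([[-k, -1], [1, 0]])
    B = array([1, 0])
    C = array([1, 0])
    D = 0
    return A, B, C, D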
The Moog filter also has a simple state-space representation. It is easy to build the total system by stitching together four one-pole lowpass filters and adding a feedback path.
$$ \begin{align} A & = \left[\begin{array}{cccc} -1 & 0 & 0 & -k \\ 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ \end{array}\right] \\ B & = \left[\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array}\right] \\ C & = [ \begin{array}{cccc} 0 & 0 & 0 & 1 \end{array} ] \\ D & = 0 \end{align} $$Here, $k$ varies from 0 (no resonance) to 4 (self-oscillation).
In [4]:
def moog_system(res):
    k = 4 * res
    A = array([[-1, 0, 0, -k], [1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]])
    B = array([1, 0, 0, 0])
    C = array([0, 0, 0, 1])
    D = 0
    return A, B, C, D
It's possible to accurately compute the frequency and phase response of a state space filter, without for example having to expand out an impulse response and compute a fourier transform.
The transfer function of a filter in (continuous) state space formulation is simple:
$$ H(s) = D + C(sI-A)^{-1}B $$The Wikipedia article on State space representation isn't a bad source for this: http://en.wikipedia.org/wiki/State_space_representation
We can use it to plot out the frequency response of some of the filters we defined above. (Note: if you're interested in phase response, just plot angle(response) instead of abs(response).)
In [5]:
def transfer(sys, s):
    A, B, C, D = sys
    return D + dot(C, dot(inv(s * identity(len(A)) - A), B))
In [6]:
xs = logspace(-3, 0, num=200)

def db(x):
    return 20 * log10(abs(x))

def freqz_ss_continuous(sys, f):
    return array([transfer(sys, 1j * x / f) for x in xs])
The one-pole lowpass and highpass filters have the expected response, with both at -3dB at the corner frequency, and 6 dB/octave rolloff.
In [7]:
f = sqrt(1e-3)
semilogx(xs, db(freqz_ss_continuous(rc_lp_system(), f)))
semilogx(xs, db(freqz_ss_continuous(rc_hp_system(), f)))
show()
Now we plot the response of the Moog filter model, which also shows the expected behavior, with the resonant peak growing as resonance increases.
In [8]:
for i in range(5):
    sys = moog_system(0.2 * i)
    semilogx(xs, db(freqz_ss_continuous(sys, f)))
By far the most popular discretization method in audio applications is the bilinear transform. A second valid choice is step invariance. Other discretization methods exist, but these two are favored because they both preserve the stability properties of the original filter.
A very simple discretization is forward Euler. You'll see this quite a bit in older papers (particularly if they seem to be struggling with how to accommodate the delay-free feedback path), but it's basically rubbish. It works okay at very low frequencies, but can easily transform a stable filter into an unstable one (as happens in the discretized state variable filter), or totally lose the resonance properties (as in the Moog). It turns out to be totally fine for the one-pole lowpass, so you'll see it there, and as a consequence works for the Moog filter in the zero resonance case, because that's just four of those strung together.
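For reference, here's what forward Euler looks like in this framework, so you can check those claims yourself (a sketch; the frequency scaling matches the step-invariant code below):
In [ ]:
# forward Euler: x[n+1] = x[n] + h*(A x[n] + B u[n]), with h = 2*pi*f
# not recommended; included only to make the instability easy to demonstrate
def forward_euler(sys, f):
    A, B, C, D = sys
    h = 2 * pi * f
    return identity(len(A)) + h * A, h * B, C, D
For the undamped state variable filter ($k = 0$), the eigenvalues of $I + hA$ have magnitude $\sqrt{1 + h^2} > 1$, so the discretized filter blows up no matter how small the step.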
Both the bilinear transform and the step-invariant transform lose accuracy, but in different ways. The bilinear transform has exactly the same frequency response as the analog prototype, but with frequencies warped, so that infinite frequency in the analog domain maps to the Nyquist frequency. By contrast, step invariance has exactly the same impulse response under the assumption that the input is a sequence of steps, each exactly one sample period wide (also known as zero order hold); but since that signal is not band-limited, the frequency response is aliased. Note that it's just the response that's aliased; there are no frequencies in the output that are not present in the input.
A good general criterion for choosing is: does the frequency response have a sharp lowpass characteristic? If so, then with step invariance, the aliased components will be small, and primarily affect only the near-Nyquist frequencies. Otherwise, for example for a high-pass response, bilinear transform is a better choice. We'll show this with graphs after implementing the discretization techniques.
First, we need to pre-warp the frequency response, so that the response of the digital filter matches that of the analog filter at the cutoff frequency. Here, frequency $f$ is expressed as a fraction of the sampling rate, so that 1/2 is the Nyquist frequency.
$ g = \tan( \pi f ) $
Then, the discretization is as given by Comparing digital implementation via the bilinear and step-invariant transformations, Guofeng Zhang and Tongwen Chen.
$$ \begin{align} A_d & = (\mathbf{I} - gA)^{-1} (\mathbf{I} + gA) \\ B_d & = 2g(\mathbf{I} - gA)^{-1} B \\ C_d & = C(\mathbf{I} - gA)^{-1} \\ D_d & = g C (\mathbf{I} - gA)^{-1} B + D \end{align} $$Here, the $(\mathbf{I} - gA)^{-1}$ term represents solving the entire set of equations simultaneously, and includes the zero-delay feedback term. To me, it's much easier just to throw the system into a matrix and solve it by matrix inverse in this way, rather than solving all the individual equations by hand. Your mileage may vary.
In [9]:
def bilinear(sys, f):
    A, B, C, D = sys
    g = tan(pi * f)
    im = inv(identity(len(A)) - g * A)
    Ad = dot(im, identity(len(A)) + g * A)
    Bd = 2 * dot(im, g * B)
    Cd = dot(C, im)
    Dd = D + dot(C, dot(im, g * B))
    return Ad, Bd, Cd, Dd
Some historical notes. To my knowledge, the first publication of a proper bilinear transform of the analog Moog filter was: http://quod.lib.umich.edu/cgi/p/pod/dod-idx/preserving-the-structure-of-the-moog-vcf-in-the-digital.pdf, Preserving the Structure of the Moog VCF in the Digital Domain, Federico Fontana.
The earlier publications by Stilson and Smith, then Huovilainen, contained defective discretizations that attempted to deal with the delay-free loop using some technique other than simply solving the equations.
These days, however, the proper bilinear transform is standard. In synthesizer circles, the technique goes by the name "zero delay filters" and also "topology preserving transform". An important milestone in popularizing this technique was the publication of http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf, The Art of VA Filter Design, by Vadim Zavalishin.
Meanwhile, outside the published literature, there is an excellent discussion of this topic in the KVR DSP Forum thread, http://www.kvraudio.com/forum/viewtopic.php?f=33&t=349859, Cheap non-linear zero-delay filters.
The step-invariant transform represents the exact response under the assumption that the input is a sequence of steps each one sample period in width (also known as zero order hold). There are a few ways to compute it; here is the one I find easiest.
$ A' = \left[\begin{array}{cc} 0 & 0 \\ B & A \end{array}\right] $
Then,
$ \left[\begin{array}{cc} \_ & \_ \\ B_d & A_d \end{array}\right] = \exp(2 \pi f A') $
Here, $\exp(A)$ represents the matrix exponential, and the underscores represent throwing away that part of the resulting matrix.
And
$$ \begin{align} C_d & = C \\ D_d & = D \end{align} $$
In [10]:
def step_invariant(sys, f):
    A, B, C, D = sys
    # build A' = [[0, 0], [B, A]] as in the equation above
    A_ = concatenate([[zeros(len(A) + 1)], concatenate([[B], A.T]).T])
    eA = linalg.expm(2 * pi * f * A_)
    Ad = eA[1:, 1:]
    Bd = eA[1:, 0]
    return Ad, Bd, C, D
Matrix exponential is a cookbook technique (implemented as linalg.expm in SciPy). It is defined by the straightforward extension of the Taylor series for ordinary exp(x) to matrices:
$$ \exp(A) = \sum\limits_{k=0}^{\infty} \frac{A^k}{k!} $$If you're implementing it yourself, for small matrices the "scaling and squaring" technique works best. See http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.9316, The scaling and squaring method for the matrix exponential revisited, Nicholas Higham. Here's a simple implementation that computes the first n1 terms of the Taylor series expansion for exp, followed by n2 repeated squarings.
In [11]:
# should give the same results as linalg.expm for well-behaved systems
def my_expm(A, n1=8, n2=8):
    A1 = A / float(1 << n2)  # float() guards against integer matrices
    A2 = np.identity(len(A))
    A3 = np.identity(len(A))
    # first n1 terms of the Taylor series for exp(A1)
    for i in range(1, n1):
        A3 = dot(A1, A3) / i
        A2 = A2 + A3
    # then square n2 times: exp(A) = exp(A1)^(2^n2)
    for i in range(n2):
        A2 = dot(A2, A2)
    return A2
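As a quick sanity check, this should agree with SciPy's expm to near machine precision on a well-behaved matrix:
In [ ]:
Am = 0.2 * moog_system(0.5)[0]
print(abs(my_expm(Am) - linalg.expm(Am)).max())  # should be tiny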
Actually running a filter given in discrete time state space formulation is pretty straightforward. Mathematically, it is given as:
$$ \begin{align} \mathbf{x}(n + 1) = & A\mathbf{x}(n) + B\mathbf{u}(n) \\ \mathbf{y}(n) = & C\mathbf{x}(n) + D\mathbf{u}(n) \end{align} $$But it's probably just as easy to show the code:
In [12]:
def runss(sys, inp):
    A, B, C, D = sys
    x = zeros(len(A))
    result = []
    for u in inp:
        result.append(dot(C, x) + D * u)
        x = dot(A, x) + B * u
    return result
We can see that the impulse response for the bilinear and step-invariant transformations of the Moog filter are pretty similar, but off in phase by a half sample or so.
In [13]:
sys = moog_system(0.8)
impulse = zeros(100)
impulse[10] = 1
plot(runss(bilinear(sys, 0.1), impulse))
plot(runss(step_invariant(sys, 0.1), impulse))
show()
Matrix-based implementations in state space can be very, very fast. The "raising" trick of Implementation of recursive digital filters into vector SIMD DSP architectures by Robelly et al. is an especially useful technique for implementing second order filters on SIMD hardware tuned for 4xfloat vectors.
See Second order sections in matrix form notebook for more details on matrix implementation, including favorable comparisons of roundoff error compared to canonical direct form IIR implementations.
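As a concrete illustration of the raising idea (a sketch under my reading of the paper, not Robelly et al's code): precompute powers of $A$, so that a block of nblock samples costs a single matrix-vector update.
In [ ]:
# Block ("raised") form of a state space filter: one state update per block.
def raise_system(sys, nblock=4):
    A, B, C, D = sys
    Ap = [identity(len(A))]  # powers A^0 .. A^nblock
    for i in range(nblock):
        Ap.append(dot(A, Ap[-1]))
    # state update: x[k+n] = A^n x[k] + sum_j A^(n-1-j) B u[k+j]
    A_blk = Ap[nblock]
    B_blk = column_stack([dot(Ap[nblock - 1 - j], B) for j in range(nblock)])
    # outputs: y[k+i] = C A^i x[k] + sum_{j<i} C A^(i-1-j) B u[k+j] + D u[k+i]
    C_blk = vstack([dot(C, Ap[i]) for i in range(nblock)])
    D_blk = zeros((nblock, nblock))
    for i in range(nblock):
        D_blk[i, i] = D
        for j in range(i):
            D_blk[i, j] = dot(C, dot(Ap[i - 1 - j], B))
    return A_blk, B_blk, C_blk, D_blk
For a second order filter with nblock = 4, the update touches only 2x2, 2x4, 4x2, and 4x4 blocks, which map naturally onto 4xfloat SIMD registers.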
As noted above, we can compute the frequency response exactly, without expanding out an impulse response. Still, it doesn't hurt to plot the Fourier transform of an impulse response anyway, if for no other reason than a sanity check:
In [14]:
impulse = concatenate([[1], zeros(255)])
w, h = signal.freqz(runss(bilinear(sys, 0.1), impulse))
semilogx(w, db(h))
show()
The transfer function of a filter in (discrete time) state space formulation is simple, and very similar to the continuous time version:
$$ H(z) = D + C(zI-A)^{-1}B $$So similar, in fact, we can reuse the existing transfer() function for both continuous and discrete time.
See https://ccrma.stanford.edu/~jos/filters/Transfer_Function_State_Space.html for the derivation and more discussion.
Now we'll plot the bilinear and step-invariant transforms for the Moog filter, against the analog prototype, for a single frequency. Both are quite close, although the frequency warping of the bilinear version causes its rolloff to accelerate at high frequencies.
In [15]:
def freqz_ss(sys):
    return array([transfer(sys, exp(1j * pi * x)) for x in xs])
axis([1e-2, 1, -60, 10])
f = 0.05
sysd = bilinear(sys, f)
semilogx(xs, db(freqz_ss(sysd)))
sysd = step_invariant(sys, f)
semilogx(xs, db(freqz_ss(sysd)))
# Analog prototype
semilogx(xs, db(freqz_ss_continuous(sys, 2 * f)))
show()
The bilinear transform shows the characteristic warping of frequency response, including a zero at the Nyquist frequency.
In [16]:
axis([1e-2, 1, -60, 10])
for i in range(5):
    f = 0.4 * .5 ** i
    sysd = bilinear(sys, f)
    semilogx(xs, db(freqz_ss(sysd)))
    semilogx(xs, db(freqz_ss_continuous(sys, 2 * f)))
show()
For this particular filter response, the step invariant transform is more accurate.
In [17]:
axis([1e-2, 1, -60, 10])
for i in range(5):
    f = 0.4 * .5 ** i
    sysd = step_invariant(sys, f)
    semilogx(xs, db(freqz_ss(sysd)))
    semilogx(xs, db(freqz_ss_continuous(sys, 2 * f)))
show()
For highpass filters, the bilinear transform performs fairly well, albeit with noticeable effects from frequency warping at higher corner frequencies.
In [18]:
sys = svf_hp_system(0.5)
axis([1e-2, 1, -60, 10])
for i in range(5):
    f = 0.4 * .5 ** i
    sysd = bilinear(sys, f)
    semilogx(xs, db(freqz_ss(sysd)))
show()
However, step-invariant is not a good choice for high-pass filter responses, as expected.
In [19]:
axis([1e-2, 1, -60, 10])
for i in range(5):
    f = 0.4 * .5 ** i
    sysd = step_invariant(sys, f)
    semilogx(xs, db(freqz_ss(sysd)))
show()
It's certainly possible to just use matrix primitives to design the filter. However, for modulating the filter parameters at a high rate, it might make sense to simplify the math symbolically. For the state variable filter, the following code goes straight to the bilinear transformed state space representation, using a by-hand symbolic solution to the matrix inverse:
In [20]:
def svf_lp(f, res):
    g = tan(pi * f)
    k = 2 - 2 * res
    a1 = 1 / (1 + g * (g + k))
    a2 = g * a1
    a3 = g * a2
    A = array([[2 * a1 - 1, -2 * a2],
               [2 * a2, 1 - 2 * a3]])
    B = array([2 * a2, 2 * a3])
    C = array([a2, 1 - a3])
    D = a3
    return A, B, C, D
Then we can check that this really does compute the same system.
In [21]:
f = 0.1
res = 0.2
print svf_lp(f, res)
print bilinear(svf_lp_system(res), f)
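Or, a tighter check than eyeballing the printouts:
In [ ]:
for m1, m2 in zip(svf_lp(f, res), bilinear(svf_lp_system(res), f)):
    print(allclose(m1, m2))  # True, four times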
Simplified code for the rest of the Audio EQ Cookbook designs is also in the Second order sections in matrix form notebook.
Similarly, for the Moog filter, it's not too difficult to invert the matrix symbolically. Both Vadim's book and the KVR thread on cheap zero delay filters contain these derivations.
TODO: adapt code and put here.
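In the meantime, here's a sketch (a stand-in of mine, not the code the TODO refers to) that uses SymPy to do the symbolic inversion; the output is a reasonable starting point for hand simplification:
In [ ]:
# symbolic bilinear transform of the Moog system via SymPy (extra dependency)
import sympy as sp

g, k = sp.symbols('g k', positive=True)
Asym = sp.Matrix([[-1, 0, 0, -k], [1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]])
Bsym = sp.Matrix([1, 0, 0, 0])
im = (sp.eye(4) - g * Asym).inv()  # closed-form zero-delay feedback solve
Ad = sp.simplify(im * (sp.eye(4) + g * Asym))
Bd = sp.simplify(2 * g * im * Bsym)
print(sp.factor(Ad[0, 0]))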
Real analog filters have nonlinearities, and true "virtual analog" simulations attempt to model those as well as the purely linear components. Antti Huovilainen's paper is excellent at presenting differential equations representing an only slightly idealized version of the Moog filter.
Really emulating the non-linearity is beyond the scope of this cookbook, but I will note that keeping the state space variables matching physical quantities (voltages across capacitors, etc.) is a huge help. There are a number of ways of simply adding the nonlinearity back in. These are covered in more detail in the "cheap zero delay" KVR thread and Vadim's book.
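As one data point, here's about the cheapest possible version (a sketch only, with no claim to circuit accuracy): saturate the state update with tanh.
In [ ]:
# runss with a tanh saturation on the state update; crude, but it illustrates
# why keeping the states aligned with physical quantities makes this easy
def runss_tanh(sys, inp):
    A, B, C, D = sys
    x = zeros(len(A))
    result = []
    for u in inp:
        result.append(dot(C, x) + D * u)
        x = tanh(dot(A, x) + B * u)
    return result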
We have shown how to design digital IIR filters in a state space framework, starting from analog prototypes and giving some choices regarding discretization. All these techniques are cookbook and don't require solving systems of equations by hand, or doing symbolic manipulation.
These techniques work on a wide range of source filters and allow both easy, fast implementation and analysis, for example plotting the frequency response as a precise function of frequency, not requiring Fourier transforms of impulse responses.
In addition, the literature is rich in techniques for state space systems. As an example, the poles of the resulting filter are exactly the eigenvalues of the $A$ matrix. In analog form, state space matrices can be composed arbitrarily, including with feedback paths, making the basic technique powerful for circuit design and analysis.
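For example, confirming the pole/eigenvalue correspondence for the Moog prototype takes one line:
In [ ]:
print(linalg.eigvals(moog_system(0.8)[0]))  # the four poles of the analog filter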
It is my hope that these matrix formulations will make the process of designing digital filters less mysterious, and accessible to a broader audience, much in the way that Don Lancaster's Active Filter Cookbook popularized analog filter designs, and no doubt contributed to a great many home-built designs.